Telegram Group Search
Quantum Computation Lecture Notes (2022) (❄️ Score: 150+ in 4 days)

Link: https://readhacker.news/s/6vPw5
Comments: https://readhacker.news/c/6vPw5
OxCaml - a set of extensions to the OCaml programming language. (🔥 Score: 154+ in 3 hours)

Link: https://readhacker.news/s/6w6jY
Comments: https://readhacker.news/c/6w6jY
Ask HN: How do I give back to the people who helped me when I was young and had nothing? (Score: 154+ in 4 hours)

Link: https://readhacker.news/c/6w6e2

Throughout my career, I've received incredible kindness and inspiration from experienced people - professors and strangers who invested time in me when I felt I had little to offer in return. While I always express gratitude and try to pay it forward, I often feel there's still an imbalance: that I owe something more direct to the specific people who shaped my life.
How do you meaningfully give back to people who helped you early on (when you literally had nothing... haha)?
What forms of gratitude have you found most meaningful?
Appreciate any comments.
Show HN: Tool-Assisted Speedrunning the Boring Parts of Animal Crossing (GCN) (Score: 150+ in 1 day)

Link: https://readhacker.news/s/6w2Mh
Comments: https://readhacker.news/c/6w2Mh

I recently dug my Nintendo GameCube out of storage to revisit the first Animal Crossing game. Things were mostly as I remembered, but the game's heavy reliance on a clunky on-screen keyboard quickly wore my patience thin.
Unwilling to accept this subpar experience, I did what any rational person would do and ordered a rare, Japan-exclusive, keyboard/controller hybrid on eBay, then used a Raspberry Pi Pico to 1. listen for keypresses and 2. send simulated controller events to the GameCube, automating typing in Animal Crossing at a Tool-Assisted Speedrun level.
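Roughly, the translation layer boils down to something like the following sketch (illustrative Python with a made-up keyboard layout and helper names, not the actual Pico firmware - that's in the repo linked below):
```python
# Hypothetical sketch: map each typed character to the cursor moves and
# button presses needed to "type" it on an on-screen keyboard.

KEYBOARD_LAYOUT = [          # assumed layout, purely for illustration
    "abcdefghij",
    "klmnopqrst",
    "uvwxyz.,!?",
]

def locate(char):
    """Return the (row, col) of a character on the on-screen keyboard."""
    for row, line in enumerate(KEYBOARD_LAYOUT):
        col = line.find(char)
        if col != -1:
            return row, col
    raise ValueError(f"character not on keyboard: {char!r}")

def moves_for(text, start=(0, 0)):
    """Translate text into a list of simulated controller inputs."""
    inputs, (cur_row, cur_col) = [], start
    for char in text:
        row, col = locate(char)
        inputs += ["DOWN" if row > cur_row else "UP"] * abs(row - cur_row)
        inputs += ["RIGHT" if col > cur_col else "LEFT"] * abs(col - cur_col)
        inputs.append("A")               # confirm the highlighted character
        cur_row, cur_col = row, col
    return inputs

print(moves_for("hi"))
```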
Of course, this oddball controller's keycaps didn't map perfectly to Animal Crossing's in-game character set, so I watched a 10-hour FreeCAD tutorial at 2x speed, then modeled the 7 keycap profiles to create 81 custom, 3D-printed keycaps, taking care to include even the most esoteric Greek and Old English characters that Nintendo chose to include in the game.
And then, having solved my original problem, I decided to sniff out some new ones.
I used my homemade TAS device to automate the entry of customizable "Town Tune" melodies, took advantage of a cracked encryption algorithm to give on-demand access to (almost) every item in the game, and, in a Club-Mate-fueled haze, whipped up a Python script to convert arbitrary images to the game's 32x32 pixel custom design format.
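Conceptually, the image conversion boils down to something like this (an illustrative Pillow sketch with an assumed palette size, not the actual script from the repo):
```python
from PIL import Image  # pip install pillow

def image_to_design(path, colours=15):
    """Downscale to 32x32 and quantize to a small palette, returning
    (x, y, palette_index) tuples the TAS device could draw in order."""
    img = Image.open(path).convert("RGB").resize((32, 32))
    img = img.quantize(colors=colours)  # the game's real palette differs (assumption)
    return [(i % 32, i // 32, idx) for i, idx in enumerate(img.getdata())]

pixels = image_to_design("frame.png")  # hypothetical input image
print(len(pixels), pixels[:3])         # 1024 pixels to feed into the game
```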
Even at superhuman speed, those 1024 pixels took about 3 minutes to input, but that didn't stop me from extending the concept to video - playing Rick Astley's "Never Gonna Give You Up", Bad Apple!, Shrek, and even a short gameplay video of DOOM very, veryyyy slowly (about 7.5 hours to render 30 seconds of footage at 5fps).
Then, realizing that DOOM at 0.0056fps probably wouldn't be the most "playable" thing in the world, I set out to get some kind of video game running within Animal Crossing, and ultimately landed on Snake.
Since it only needs to update 1 pixel for every frame of animation, I was able to get Snake running at around 1ish* frames per second (for technical reasons, it runs at a variable framerate).
Maybe not the most primo experience the modern gaming world has to offer, but without a doubt, technically a video game. It even has its own, in-memory high score ranking (so far I'm undefeated).
The code and design files are distributed for free on GitHub[0], and a build/demonstration video[1] is out now on YouTube.
[0] - https://github.com/hunterirving/pico-crossing
[1] - https://www.youtube.com/watch/vYw8Alf_lolA
It started as a "quick, simple project", then quickly ballooned into 7 or 8 "quick, simple projects", but I had a ton of fun putting it all together. Thanks for checking it out!
Show HN: Tattoy – a text-based terminal compositor (Score: 150+ in 11 hours)

Link: https://readhacker.news/s/6w6hw
Comments: https://readhacker.news/c/6w6hw

While this is mostly a terminal eye-candy project to earn you some street cred, it does have some serious aspects.
Firstly it solves the age-old problem of low-contrast text, like when you `ls` a broken symlink and the red background colour is too near your current theme's foreground colour. Tattoy solves this by using none other than the web's WCAG 2.1 contrast algorithm for accessible text.
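For reference, the WCAG 2.1 contrast calculation boils down to a few lines, sketched here in Python (this is the standard formula rather than Tattoy's own code):
```python
def relative_luminance(rgb):
    """Relative luminance of an sRGB colour, per the WCAG 2.1 definition."""
    def linearise(c):
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearise(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Ratio from 1:1 (identical colours) up to 21:1 (black on white)."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Dim red vs. a dark grey: roughly 1.7:1, well below the 4.5:1 AA threshold.
print(round(contrast_ratio((180, 40, 40), (60, 60, 60)), 2))
```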
Secondly, an explicit design goal is that Tattoy should be able to polyfill new terminal protocols - the `xwayland` of the TTY, if you will. Say we want to experiment with completely deprecating ANSI codes: any application that uses a new protocol can be run in Tattoy, which itself runs in any ANSI-standard terminal emulator as normal. You can read more about this idea here: https://tattoy.sh/news/an-end-to-terminal-ansi-codes/
But ultimately this has been something more akin to an art project, something to enjoy for the sheer aesthetic pleasure.
TimeGuessr (❄️ Score: 151+ in 4 days)

Link: https://readhacker.news/s/6vSW9
Comments: https://readhacker.news/c/6vSW9
SIMD-friendly algorithms for substring searching (2018) (Score: 150+ in 12 hours)

Link: https://readhacker.news/s/6w7Zb
Comments: https://readhacker.news/c/6w7Zb
Launch HN: Chonkie (YC X25) – Open-Source Library for Advanced Chunking (❄️ Score: 150+ in 5 days)

Link: https://readhacker.news/c/6vQEL

Hey HN! We're Shreyash and Bhavnick. We're building Chonkie (https://chonkie.ai), an open-source library for chunking and embedding data.
Python: https://github.com/chonkie-inc/chonkie
TypeScript: https://github.com/chonkie-inc/chonkie-ts
Here's a video showing our code chunker: https://youtu.be/Xclkh6bU1P0.
Bhavnick and I have been building personal projects with LLMs for a few years. For much of this time, we found ourselves writing our own chunking logic to support RAG applications. We often hesitated to use existing libraries because they either had only basic features or felt too bloated (some are 80MB+).
We built Chonkie to be lightweight, fast, extensible, and easy. The space is evolving rapidly, and we wanted Chonkie to be able to quickly support the newest strategies. We currently support Token Chunking, Sentence Chunking, Recursive Chunking, and Semantic Chunking (the recursive approach is sketched after the list below), plus:
- Semantic Double Pass Chunking: Chunks text semantically first, then merges closely related chunks.
- Code Chunking: Chunks code files by creating an AST and finding ideal split points.
- Late Chunking: Based on the paper (https://arxiv.org/abs/2409.04701), where chunk embeddings are derived from embedding a longer document.
- Slumber Chunking: Based on the "Lumber Chunking" paper (https://arxiv.org/abs/2406.17526). It uses recursive chunking, then an LLM verifies split points, aiming for high-quality chunks with reduced token usage and LLM costs.
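To make the recursive strategy concrete, here is a minimal illustrative version (not the actual Chonkie implementation, which is considerably more optimized):
```python
SEPARATORS = ["\n\n", "\n", ". ", " "]  # paragraph -> line -> sentence -> word

def recursive_chunk(text, max_chars=200, separators=SEPARATORS):
    """Split text into chunks of at most max_chars characters (best effort),
    preferring the coarsest separator that keeps pieces under the limit."""
    if len(text) <= max_chars:
        return [text]
    if not separators:
        # nothing left to split on: fall back to a hard character split
        return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]
    sep, *rest = separators
    chunks, current = [], ""
    for piece in text.split(sep):
        candidate = f"{current}{sep}{piece}" if current else piece
        if len(candidate) <= max_chars:
            current = candidate
            continue
        if current:
            chunks.append(current)
        if len(piece) > max_chars:
            # the piece alone is still too big: recurse with finer separators
            chunks.extend(recursive_chunk(piece, max_chars, rest))
            current = ""
        else:
            current = piece
    if current:
        chunks.append(current)
    return chunks

print(recursive_chunk("One paragraph.\n\nAnother, much longer paragraph "
                      "that will not fit in a single chunk on its own.", max_chars=40))
```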
You can see how Chonkie compares to LangChain and LlamaIndex in our benchmarks: https://github.com/chonkie-inc/chonkie/blob/main/BENCHMARKS....
Some technical details about the Chonkie package:
- ~15MB default install vs. ~80-170MB for some alternatives.
- Up to 33x faster token chunking compared to LangChain and LlamaIndex in our tests.
- Works with major tokenizers (transformers, tokenizers, tiktoken).
- Zero external dependencies for basic functionality.
- Implements aggressive caching and precomputation.
- Uses running mean pooling for efficient semantic chunking (sketched below).
- Modular dependency system (install only what you need).
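The running mean pooling mentioned above works roughly like this (an illustrative sketch of the general idea with a stand-in embed() function, not Chonkie's exact code):
```python
import numpy as np

def semantic_chunk(sentences, embed, threshold=0.7):
    """Group consecutive sentences whose embeddings stay close to the running
    mean embedding of the current chunk; start a new chunk on a topic shift."""
    chunks, current, mean = [], [], None
    for sentence in sentences:
        vec = np.asarray(embed(sentence), dtype=float)  # embed() is a stand-in
        if current:
            sim = vec @ mean / (np.linalg.norm(vec) * np.linalg.norm(mean))
            if sim < threshold:  # low similarity -> topic shift -> new chunk
                chunks.append(" ".join(current))
                current, mean = [], None
        current.append(sentence)
        # update the running mean in O(1) instead of re-pooling the whole chunk
        mean = vec if mean is None else mean + (vec - mean) / len(current)
    if current:
        chunks.append(" ".join(current))
    return chunks
```
Updating the mean incrementally means each new sentence costs one embedding call plus a constant-time vector update, rather than re-pooling the whole chunk.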
In addition to chunking, Chonkie also provides an easy way to create embeddings. For supported providers (SentenceTransformer, Model2Vec, OpenAI), you just specify the model name as a string. You can also create custom embedding handlers for other providers.
RAG is still the most common use case currently. However, Chonkie makes chunks that are optimized for creating high-quality embeddings and vector retrieval, so it is not really tied to the "generation" part of RAG. In fact, we're seeing more and more people use Chonkie for implementing semantic search and/or setting context for agents.
We are currently focused on building integrations to simplify the retrieval process. We've created "handshakes" – thin functions that interact with vector DBs like pgVector, Chroma, TurboPuffer, and Qdrant, allowing you to interact with storage easily. If there's an integration you'd like to see (vector DB or otherwise), please let us know.
We also offer hosted and on-premise versions with OCR, extra metadata, all embedding providers, and managed vector databases for teams that want a fully managed pipeline. If you're interested, reach out at [email protected] or book a demo: https://cal.com/shreyashn/chonkie-demo.
We're eager to hear your feedback and comments! Thanks!